
    Deep Reinforcement Learning for Concentric Tube Robot Path Planning

    As surgical interventions trend towards minimally invasive approaches, Concentric Tube Robots (CTRs) have been explored for various interventions such as brain, eye, fetoscopic, lung, cardiac, and prostate surgeries. Arranged concentrically, each tube is rotated and translated independently to move the robot end-effector position, making kinematics and control challenging. Classical model-based approaches have been previously investigated, with developments in deep learning-based approaches outperforming more classical approaches in both forward kinematics and shape estimation. We propose a deep reinforcement learning approach to control in which we generalise across two- to four-tube systems, an element not yet achieved in any other deep learning approach for CTRs. In this way, we explore the likely robustness of the control approach. Also investigated is the impact of rotational constraints applied to tube actuation and the effects on error metrics. We evaluate inverse kinematics errors and tracking errors for path-following tasks and compare the results to those achieved using state-of-the-art methods. Additionally, as current results are obtained in simulation, we also investigate a domain transfer approach known as domain randomization and evaluate error metrics as an initial step towards hardware implementation. Finally, we compare our method to a Jacobian approach found in the literature. Comment: 13 pages, 13 figures. Accepted to IEEE Transactions on Medical Robotics and Bionics Symposium Special Issue.

    Deep Reinforcement Learning for Concentric Tube Robot Path Following

    As surgical interventions trend towards minimally invasive approaches, Concentric Tube Robots (CTRs) have been explored for various interventions such as brain, eye, fetoscopic, lung, cardiac, and prostate surgeries. Arranged concentrically, each tube is rotated and translated independently to move the robot end-effector position, making kinematics and control challenging. Classical model-based approaches have been previously investigated, with developments in deep learning-based approaches outperforming more classical approaches in both forward kinematics and shape estimation. We propose a deep reinforcement learning approach to control in which we generalize across two- to four-tube systems, an element not yet achieved in any other deep learning approach for CTRs. In this way, we explore the likely robustness of the control approach. Also investigated is the impact of rotational constraints applied to tube actuation and the effects on error metrics. We evaluate inverse kinematics errors and tracking errors for path-following tasks and compare the results to those achieved using state-of-the-art methods. Additionally, as current results are performed in simulation, we also investigate a domain transfer approach known as domain randomization and evaluate error metrics as an initial step toward hardware implementation. Finally, we compare our method to a Jacobian approach found in the literature.
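    The domain-randomization step mentioned above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the parameter names and perturbation ranges here are assumptions chosen for the example.

    ```python
    import random

    # Illustrative simulation parameters for a concentric tube robot.
    # The specific names and nominal values are assumptions for this sketch.
    NOMINAL_PARAMS = {
        "tube_stiffness": 1.0,    # relative bending stiffness
        "tube_curvature": 0.02,   # pre-curvature of a tube, 1/mm
        "actuation_noise": 0.0,   # std-dev of joint actuation noise
    }

    def randomize_domain(nominal, scale=0.1):
        """Perturb each simulation parameter by up to +/- scale, so a policy
        trained across many such randomized domains is more likely to
        transfer from simulation to real hardware."""
        return {
            name: value * (1.0 + random.uniform(-scale, scale)) if value != 0
            else random.uniform(0.0, scale)
            for name, value in nominal.items()
        }

    # At the start of every training episode, the simulated environment
    # would be rebuilt with freshly randomized parameters:
    episode_params = randomize_domain(NOMINAL_PARAMS)
    ```

    Resampling at every episode (rather than once per training run) is what forces the learned controller to be robust to model mismatch.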

    The Effect of Adaptive Learning Style Scenarios on Learning Achievements

    Bozhilov, D., Stefanov, K., & Stoyanov, S. (2009). Effect of adaptive learning style scenarios on learning achievements [Special issue]. International Journal of Continuing Engineering Education and Lifelong Learning (IJCEELL), 19(4/5/6), 381-398. The study compares three adaptive learning style scenarios, namely matching, compensating, and monitoring. Matching and compensating scenarios operate in a design-time mode, while monitoring applies a run-time adaptation mode. In addition, the study investigates the role of pre-assessment and embedded adaptation controls. To measure the effectiveness of the different adaptive scenarios, a software application serving as a test-bed was developed. An experimental study indicated that the monitoring adaptation led to higher learning achievements when compared to matching and compensating adaptation, although no statistically significant effect was found.

    Online estimation of the hand-eye transformation from surgical scenes

    Hand-eye calibration algorithms are mature and provide accurate transformation estimates for an effective camera-robot link, but they rely on a sufficiently wide range of calibration data to avoid errors and degenerate configurations. To solve the hand-eye problem in robotic-assisted minimally invasive surgery and to simplify the calibration procedure, we present a neural network-based solution that estimates the transformation from a sequence of images and kinematic data. The network utilises the long short-term memory architecture to extract temporal information from the data and solve the hand-eye problem. The objective function is derived from a linear combination of the remote centre of motion constraint, the re-projection error, and its derivative, to induce only small changes in the hand-eye transformation. The method is validated with data from a da Vinci Si, and the results show that the estimated hand-eye matrix is able to re-project the end-effector from the robot coordinate frame to the camera coordinate frame within 10 to 20 pixels of accuracy on both testing datasets. The calibration performance is also superior to that of a previous neural network-based hand-eye method. The proposed algorithm shows that the calibration procedure can be simplified by using deep learning techniques and that performance is improved by the assumption of non-static hand-eye transformations. Comment: 6 pages, 4 main figures.
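    The re-projection error that anchors the objective above can be sketched as follows. This is an illustrative sketch only: the abstract's full loss additionally combines the remote-centre-of-motion term and the derivative term, and the function signature here is an assumption.

    ```python
    import numpy as np

    def reprojection_error(points_robot, points_px, hand_eye, K):
        """Mean pixel distance between observed 2-D image points and 3-D
        robot-frame points mapped through the 4x4 hand-eye transform and
        projected with the 3x3 camera intrinsic matrix K."""
        # Homogeneous robot-frame points, transformed into the camera frame.
        pts_h = np.hstack([points_robot, np.ones((len(points_robot), 1))])
        cam = (hand_eye @ pts_h.T)[:3]        # 3 x N camera-frame points
        # Pinhole projection to pixel coordinates.
        proj = K @ cam
        proj = (proj[:2] / proj[2]).T         # N x 2 pixel coordinates
        return np.mean(np.linalg.norm(proj - points_px, axis=1))
    ```

    Minimizing this quantity (together with the other terms) over a sequence of frames is what lets the network track a slowly varying, non-static hand-eye transformation.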

    FaceOff: Anonymizing Videos in the Operating Rooms

    Video capture in the surgical operating room (OR) is increasingly possible and has potential for use with computer-assisted interventions (CAI), surgical data science, and smart OR integration. Captured video innately carries sensitive information that should not be fully visible, in order to preserve the identities of the patient and the clinical team. When surgical video streams are stored on a server, the videos must be anonymized prior to storage if taken outside of the hospital. In this article, we describe how a deep learning model, Faster R-CNN, can be used for this purpose and help to anonymize video data captured in the OR. The model detects and blurs faces in an effort to preserve anonymity. After testing an existing trained face-detection model, a new dataset tailored to the surgical environment, with faces obstructed by surgical masks and caps, was collected for fine-tuning to achieve higher face-detection rates in the OR. We also propose a temporal regularisation kernel to improve recall rates. The fine-tuned model achieves a face-detection recall of 88.05% and 93.45% before and after applying temporal smoothing, respectively. Comment: MICCAI 2018: OR 2.0 Context-Aware Operating Theater.
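    The temporal regularisation idea can be sketched as smoothing per-frame detection confidences across time, so a face the detector briefly misses in one frame still receives a non-zero score and remains blurred. The box kernel and threshold below are assumptions for illustration; the paper's kernel may differ.

    ```python
    import numpy as np

    def temporal_smooth(confidences, kernel_size=5):
        """Smooth per-frame face-detection confidences with a normalized
        box kernel. Frames where the detector briefly drops a face inherit
        support from their neighbours, improving recall."""
        kernel = np.ones(kernel_size) / kernel_size
        return np.convolve(confidences, kernel, mode="same")

    # A face tracked across six frames, with one missed detection (frame 3):
    scores = np.array([0.9, 0.9, 0.9, 0.0, 0.9, 0.9])
    smoothed = temporal_smooth(scores)
    # After smoothing, the missed frame has a positive score, so it still
    # passes a lowered blur threshold and the anonymization is uninterrupted.
    ```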

    FetReg2021: A Challenge on Placental Vessel Segmentation and Registration in Fetoscopy

    Fetoscopic laser photocoagulation is a widely adopted procedure for treating Twin-to-Twin Transfusion Syndrome (TTTS). The procedure involves photocoagulation of pathological anastomoses to regulate blood exchange between the twins. It is particularly challenging due to the limited field of view, poor manoeuvrability of the fetoscope, poor visibility, and variability in illumination. These challenges may lead to increased surgery time and incomplete ablation. Computer-assisted intervention (CAI) can provide surgeons with decision support and context awareness by identifying key structures in the scene and expanding the fetoscopic field of view through video mosaicking. Research in this domain has been hampered by the lack of high-quality data with which to design, develop, and test CAI algorithms. Through the Fetoscopic Placental Vessel Segmentation and Registration (FetReg2021) challenge, organized as part of the MICCAI 2021 Endoscopic Vision challenge, we released the first large-scale multi-centre TTTS dataset for the development of generalized and robust semantic segmentation and video mosaicking algorithms. For this challenge, we released a dataset of 2060 images, pixel-annotated for vessel, tool, fetus, and background classes, from 18 in-vivo TTTS fetoscopy procedures and 18 short video clips. Seven teams participated in this challenge, and their model performance was assessed on an unseen test dataset of 658 pixel-annotated images from 6 fetoscopic procedures and 6 short clips. The challenge provided an opportunity for creating generalized solutions for fetoscopic scene understanding and mosaicking. In this paper, we present the findings of the FetReg2021 challenge alongside a detailed literature review of CAI in TTTS fetoscopy. Through this challenge, its analysis, and the release of multi-centre fetoscopic data, we provide a benchmark for future research in this field.
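    Semantic segmentation entries in a challenge like this are commonly scored with per-class intersection-over-union between predicted and reference label maps. A minimal sketch for the four annotated classes follows; the class-index assignment is an assumption, and the challenge's official metric may differ in detail.

    ```python
    import numpy as np

    # Assumed integer labels for the four annotated classes.
    CLASSES = {0: "background", 1: "vessel", 2: "tool", 3: "fetus"}

    def per_class_iou(pred, target, num_classes=4):
        """Intersection-over-union per class between two integer label maps
        of identical shape. Classes absent from both maps score NaN."""
        ious = {}
        for c in range(num_classes):
            inter = np.logical_and(pred == c, target == c).sum()
            union = np.logical_or(pred == c, target == c).sum()
            ious[CLASSES[c]] = inter / union if union else float("nan")
        return ious
    ```

    Averaging the per-class scores (a mean IoU) gives a single number that does not let the dominant background class mask poor vessel or tool segmentation.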